
    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contribute in a fundamental manner to brain function. Progress has been slow in understanding and exploiting the computational power of recurrent dynamics, for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work robustly in recurrent networks. Here we address both problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned through a supervised learning rule to generate locally stable neural activity patterns that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
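
    The paper's supervised rule tunes the recurrent weights themselves; as a rough illustration of the general idea, the sketch below trains only a feedback readout with FORCE-style recursive least squares on a chaotic rate network. Network size, gain, the target trajectory, and all time constants are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# FORCE-style sketch: a chaotic random rate network (gain g > 1) with a
# trained feedback readout is nudged into producing a stable target
# trajectory. Readout-only training is a stand-in for the paper's
# recurrent learning rule; all parameters are illustrative assumptions.
rng = np.random.default_rng(0)
N, g, dt, tau = 300, 1.5, 1e-3, 10e-3
W = g * rng.standard_normal((N, N)) / np.sqrt(N)   # chaotic recurrence
w_fb = rng.uniform(-1.0, 1.0, N)                   # readout feedback
w_out = np.zeros(N)                                # trained readout
P = np.eye(N)                                      # RLS inverse correlation

t_axis = np.arange(0.0, 2.0, dt)
target = np.sin(2 * np.pi * 3 * t_axis) * np.exp(-t_axis)  # toy motor pattern

x = 0.5 * rng.standard_normal(N)
for f in target:
    r = np.tanh(x)
    z = w_out @ r
    x += dt / tau * (-x + W @ r + w_fb * z)        # leaky rate dynamics
    # Recursive least squares (FORCE) update of the readout weights.
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w_out -= (z - f) * k
print("final readout error:", abs(w_out @ np.tanh(x) - target[-1]))
```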

    Distortions of Subjective Time Perception Within and Across Senses

    Background: The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings: We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual, and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perceived duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by conflicting visual information; however, the perceived duration of visual events was seldom distorted by auditory information, and visual events were never perceived as shorter than their actual durations. Conclusions/Significance: These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Because the distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual, or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.

    Challenges in Complex Systems Science

    FuturICT's foundations are social science, complex systems science, and ICT. This paper lays out the main concerns and challenges in the science of complex systems in the context of FuturICT, with special emphasis on the complex-systems route to the social sciences. These include complex systems having: many heterogeneous interacting parts; multiple scales; complicated transition laws; unexpected or unpredicted emergence; sensitive dependence on initial conditions; path-dependent dynamics; networked hierarchical connectivities; interaction of autonomous agents; self-organisation; non-equilibrium dynamics; combinatorial explosion; adaptivity to changing environments; co-evolving subsystems; ill-defined boundaries; and multilevel dynamics. In this context, science is seen as the process of abstracting the dynamics of systems from data. This presents many challenges, including: gathering data by large-scale experiments, participatory sensing, and social computation; managing huge, distributed, dynamic, and heterogeneous databases; moving from data to dynamical models; going beyond correlations to cause-effect relationships; understanding the relationship between simple and comprehensive models with appropriate choices of variables; ensemble modeling and data assimilation; modeling systems of systems of systems with many levels between micro and macro; and formulating new approaches to prediction, forecasting, and risk, especially in systems that can reflect on and change their behaviour in response to predictions, and in systems whose apparently predictable behaviour is disrupted by apparently unpredictable rare or extreme events. These challenges are part of the FuturICT agenda.

    Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

    Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an oversimplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by fusing it with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN with an application to the online decoding (i.e. recognition) of human kinematics.
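
    A minimal sketch of the inversion idea: take a small rate RNN as the generative model of a noisy signal and decode its hidden state with an extended Kalman filter, whose innovation term plays the role of the prediction error messages described above. The paper derives its own Bayesian update equations; the EKF here is a generic stand-in, and all dimensions and noise levels are assumptions.

```python
import numpy as np

# Recognition by inversion: an RNN serves as a generative model of a
# dynamic signal, and an extended Kalman filter acts as the Bayesian
# update turning it into a recognizing RNN (rRNN). Dimensions, noise
# levels, and the observation matrix are illustrative assumptions.
rng = np.random.default_rng(1)
n, m = 20, 5
W = rng.standard_normal((n, n)) / np.sqrt(n)   # generative RNN weights
C = rng.standard_normal((m, n)) / np.sqrt(n)   # observation mapping
Q, R = 1e-4 * np.eye(n), 1e-2 * np.eye(m)      # process / observation noise

x_true = rng.standard_normal(n)
x_hat, P = np.zeros(n), np.eye(n)
for _ in range(200):
    # Generate one step of the "environment" from the generative model.
    x_true = np.tanh(W @ x_true) + rng.multivariate_normal(np.zeros(n), Q)
    y = C @ x_true + rng.multivariate_normal(np.zeros(m), R)

    # Prediction step: run the generative RNN forward.
    pre = W @ x_hat
    x_pred = np.tanh(pre)
    F = (1.0 - np.tanh(pre) ** 2)[:, None] * W   # Jacobian of tanh(Wx)
    P = F @ P @ F.T + Q

    # Update step: the innovation is the prediction error message.
    err = y - C @ x_pred
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    x_hat = x_pred + K @ err
    P = (np.eye(n) - K @ C) @ P
print("decoding error:", np.linalg.norm(x_hat - x_true))
```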

    Activity in perceptual classification networks as a basis for human subjective time perception

    Despite being a fundamental dimension of experience, how the human brain generates the perception of time remains unknown. Here, we provide a novel explanation for how human time perception might be accomplished, based on non-temporal perceptual classification processes. To demonstrate this proposal, we build an artificial neural system centred on a feed-forward image classification network, functionally similar to human visual processing. In this system, input videos of natural scenes drive changes in network activation, and the accumulation of salient changes in activation is used to estimate duration. Estimates produced by this system match human reports made about the same videos, replicating key qualitative biases, including the difference between scenes of walking around a busy city and scenes of sitting in a cafe or office. Our approach provides a working model of duration perception from stimulus to estimation and presents a new direction for examining the foundations of this central aspect of human experience.
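
    A toy sketch of the accumulation mechanism, assuming a hypothetical `extract_activations` stand-in for the classification network's layer activations: frame-to-frame changes in activation that exceed a slowly relaxing threshold are counted as salient events, and the accumulated count would be mapped to seconds by regression. Thresholds, decay constants, and the stub are assumptions, not the published model's parameters.

```python
import numpy as np

# Accumulation sketch: per video frame, measure the change in a
# classification network's activations; changes exceeding an adapting
# threshold count as salient "units" of subjective duration.
rng = np.random.default_rng(2)

def extract_activations(frame):
    # Hypothetical stub: in the actual model this would be a pretrained
    # image classifier's layer activations for this frame.
    return np.tanh(rng.standard_normal(512) * 0.1 + frame.mean())

def estimate_units(frames, thresh0=5.0, decay=0.99, reset=5.0):
    units, thresh = 0, thresh0
    prev = extract_activations(frames[0])
    for frame in frames[1:]:
        act = extract_activations(frame)
        change = np.linalg.norm(act - prev)  # salience of the change
        if change > thresh:
            units += 1                       # accumulate a salient event
            thresh = reset                   # raise threshold after event
        else:
            thresh *= decay                  # threshold relaxes over time
        prev = act
    return units  # would be mapped to seconds by a trained regression

frames = [rng.random((64, 64)) for _ in range(100)]  # toy "video"
print("accumulated units:", estimate_units(frames))
```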

    Computational Models of Timing Mechanisms in the Cerebellar Granular Layer

    A long-standing question in neuroscience is how the brain controls movements that require precisely timed muscle activations. Studies using Pavlovian delay eyeblink conditioning provide good insight into this question. In delay eyeblink conditioning, which is believed to involve the cerebellum, a subject learns the interstimulus interval (ISI) between the onset of a conditioned stimulus (CS), such as a tone, and that of an unconditioned stimulus, such as an airpuff to the eye. After a conditioning phase, the subject's eyes automatically close or blink when the ISI has elapsed after CS onset. This timing information is thought to be represented in some way in the cerebellum. Several computational models of the cerebellum have been proposed to explain the mechanisms of time representation, and they commonly point to the granular layer network. This article reviews these computational models and discusses the possible computational power of the cerebellum.
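
    A minimal sketch of the "passage of time" idea shared by these granular-layer models: after CS onset the granule cell population traverses a reproducible trajectory of activity states, so the current population vector encodes elapsed time, and a readout (standing in for a Purkinje cell trained by LTD) responds when the state stored at the ISI recurs. The network and all parameters below are illustrative assumptions.

```python
import numpy as np

# Passage-of-time sketch: a random recurrent network (a loose stand-in
# for the granule-Golgi loop) generates a reproducible state sequence
# after CS onset; elapsed time is read out by matching the current
# population state to the state stored at the ISI during "learning".
rng = np.random.default_rng(3)
N, steps, isi = 200, 100, 60
W = 1.5 * rng.standard_normal((N, N)) / np.sqrt(N)  # random recurrence

def run_trial(x0):
    # Deterministic population trajectory triggered by the CS state x0.
    x, states = x0.copy(), []
    for _ in range(steps):
        x = np.tanh(W @ x)
        states.append(x.copy())
    return np.array(states)

cs_state = rng.standard_normal(N)    # CS onset sets the initial state
template = run_trial(cs_state)[isi]  # "learning": store the ISI state

replay = run_trial(cs_state)         # a later trial with the same CS
similarity = replay @ template       # readout: match to stored state
print("readout peaks at step", int(np.argmax(similarity)))  # ~= isi
```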

    The location of the axon initial segment affects the bandwidth of spike initiation dynamics

    The dynamics and the sharp onset of action potential (AP) generation have recently been the subject of intense experimental and theoretical investigation. According to the resistive coupling theory, an electrotonic interplay between the site of AP initiation in the axon and the somato-dendritic load determines the AP waveform. This phenomenon not only alters the shape of the AP recorded at the soma, but also determines the dynamics of excitability across a variety of time scales. Supporting this statement, here we generalize a previous numerical study and extend it to the quantification of the input-output gain of the neuronal dynamical response. We consider three classes of multicompartmental mathematical models, ranging from ball-and-stick simplified descriptions of neuronal excitability to 3D-reconstructed biophysical models of excitatory neurons of rodent and human cortical tissue. For each model, we demonstrate that increasing the distance between the axonal site of AP initiation and the soma markedly increases the bandwidth of neuronal response properties. We finally consider the Liquid State Machine paradigm, exploring the impact of altering the site of AP initiation at the level of a neuronal population, and demonstrate that an optimal distance exists to boost the computational performance of the network in a simple classification task.
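
    A two-compartment caricature of resistive coupling, assuming the axial conductance between soma and initiation site simply falls off with distance: spike generation (an exponential integrate-and-fire term) lives only in the axonal compartment, while the soma acts as a capacitive load. This toy only wires up the geometry and counts spikes; probing bandwidth as in the paper would require measuring responses to modulated inputs, and every parameter here is an assumption, not taken from the paper's models.

```python
import numpy as np

# Two-compartment caricature of resistive coupling: soma (passive load)
# and axonal initiation site (exponential integrate-and-fire) linked by
# an axial conductance g_c that falls off with the AIS distance.
def simulate(distance_um, t_max=0.2, dt=1e-5, I_soma=0.5e-9):
    C_s, C_a = 250e-12, 5e-12              # capacitances (F)
    g_s, g_a = 15e-9, 1e-9                 # leak conductances (S)
    E_L, V_T, D_T = -70e-3, -50e-3, 1e-3   # leak, threshold, slope (V)
    g_c = 300e-9 / distance_um             # axial coupling vs. distance
    V_s, V_a, n_spikes = E_L, E_L, 0
    for _ in range(int(t_max / dt)):
        # Spike-generating exponential current only at the axonal site.
        I_exp = g_a * D_T * np.exp((V_a - V_T) / D_T)
        dV_s = (g_s * (E_L - V_s) + g_c * (V_a - V_s) + I_soma) / C_s
        dV_a = (g_a * (E_L - V_a) + g_c * (V_s - V_a) + I_exp) / C_a
        V_s += dt * dV_s
        V_a += dt * dV_a
        if V_a > 0.0:                      # AP initiated in the axon
            n_spikes += 1
            V_a = E_L                      # reset after the spike
    return n_spikes

for d_um in (5, 20, 50):                   # proximal vs. distal AIS
    print(f"AIS at {d_um} um: {simulate(d_um)} spikes in 200 ms")
```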

    Emergence of Connectivity Motifs in Networks of Model Neurons with Short- and Long-term Plastic Synapses

    Recent evidence from rodent cerebral cortex and olfactory bulb suggests that the short-term dynamics of excitatory synaptic transmission is correlated with stereotypical connectivity motifs. It was observed that neurons with short-term facilitating synapses form predominantly reciprocal pairwise connections, while neurons with short-term depressing synapses form unidirectional pairwise connections. The cause of these structural differences in synaptic microcircuits is unknown. We propose that these connectivity motifs emerge from the interactions between short-term synaptic dynamics (SD) and long-term spike-timing dependent plasticity (STDP). While the impact of STDP on SD has been shown in vitro, the mutual interactions between STDP and SD in large networks are still the subject of intense research. We formulate a computational model by combining SD and STDP, which faithfully captures short- and long-term dependence on both spike times and frequency. As a proof of concept, we simulate recurrent networks of spiking neurons with random initial connection efficacies and with synapses that are either all short-term facilitating or all depressing. For identical background inputs, and as a direct consequence of internally generated activity, we find that networks with depressing synapses evolve unidirectional connectivity motifs, while networks with facilitating synapses evolve reciprocal connectivity motifs. This holds for heterogeneous networks including both facilitating and depressing synapses. Our study highlights the conditions under which SD-STDP might explain the correlation between facilitation and reciprocal connectivity motifs, as well as between depression and unidirectional motifs. We further suggest experiments for the validation of the proposed mechanism.
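
    A single-synapse sketch of combining the two mechanisms, assuming Tsodyks-Markram-style short-term dynamics and pair-based STDP with exponential traces; the paper's fitted model differs in detail, and all time constants, learning rates, and spike times here are illustrative.

```python
# Single-synapse sketch: short-term dynamics (Tsodyks-Markram style
# depression and facilitation) combined with pair-based STDP traces.
def run_synapse(pre_spikes, post_spikes, U=0.5, tau_d=0.2, tau_f=0.05,
                tau_stdp=0.02, a_plus=0.01, a_minus=0.012,
                w=0.5, dt=1e-3, t_max=0.5):
    x, u = 1.0, U                   # available resources, release prob.
    k_pre = k_post = 0.0            # STDP eligibility traces
    psps = []                       # efficacy of each transmitted spike
    for step in range(int(t_max / dt)):
        t = step * dt
        # Short-term variables and STDP traces relax between spikes.
        x += dt * (1.0 - x) / tau_d
        u += dt * (U - u) / tau_f
        k_pre -= dt * k_pre / tau_stdp
        k_post -= dt * k_post / tau_stdp
        if any(abs(t - s) < dt / 2 for s in pre_spikes):
            psps.append(w * u * x)  # efficacy set by short-term state
            x -= u * x              # resources consumed -> depression
            u += U * (1.0 - u)      # release prob. boosted -> facilitation
            k_pre += 1.0
            w = max(0.0, w - a_minus * k_post)  # post-before-pre pairing
        if any(abs(t - s) < dt / 2 for s in post_spikes):
            k_post += 1.0
            w = min(1.0, w + a_plus * k_pre)    # pre-before-post pairing
    return w, psps

pre = [0.100, 0.120, 0.140, 0.300]   # pre fires 5 ms before each post
post = [0.105, 0.125, 0.145, 0.305]
w_final, psps = run_synapse(pre, post)
print("weight after pairing:", round(w_final, 4))
print("per-spike efficacies:", [round(p, 4) for p in psps])
```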